impact assessment
Coupling Agent-based Modeling and Life Cycle Assessment to Analyze Trade-offs in Resilient Energy Transitions
Zhang, Beichen, Zaki, Mohammed T., Breunig, Hanna, Ajami, Newsha K.
Transitioning to sustainable and resilient energy systems requires navigating complex and interdependent trade-offs across environmental, social, and resource dimensions. Neglecting these trade-offs can lead to unintended consequences across sectors. However, existing assessments often evaluate emerging energy pathways and their impacts in silos, overlooking critical interactions such as regional resource competition and cumulative impacts. We present an integrated modeling framework that couples agent-based modeling and Life Cycle Assessment (LCA) to simulate how energy transition pathways interact with regional resource competition, ecological constraints, and community-level burdens. We apply the model to a case study in Southern California. The results demonstrate how integrated and multiscale decision making can shape energy pathway deployment and reveal spatially explicit trade-offs under scenario-driven constraints. This modeling framework can further support more adaptive and resilient energy transition planning across spatial and institutional scales.
- North America > United States > California > Riverside County (0.14)
- North America > United States > California > Imperial County (0.14)
- North America > United States > Colorado > Boulder County > Boulder (0.04)
- (7 more...)
Position Paper: If Innovation in AI Systematically Violates Fundamental Rights, Is It Innovation at All?
Castañeira, Josu Eguiluz, Brando, Axel, Laukyte, Migle, Serra-Vidal, Marc
Artificial intelligence (AI) now permeates critical infrastructures and decision-making systems where failures produce social, economic, and democratic harm. This position paper challenges the entrenched belief that regulation and innovation are opposites. As evidenced by analogies from aviation, pharmaceuticals, and welfare systems, and by recent cases of synthetic misinformation, bias, and unaccountable decision-making, the absence of well-designed regulation has already created immeasurable damage. Regulation, when thoughtful and adaptive, is not a brake on innovation -- it is its foundation. The present position paper examines the EU AI Act as a model of risk-based, responsibility-driven regulation that addresses the Collingridge Dilemma: acting early enough to prevent harm, yet flexibly enough to sustain innovation. Its adaptive mechanisms -- regulatory sandboxes, support for small and medium-sized enterprises (SMEs), real-world testing, fundamental rights impact assessment (FRIA) -- demonstrate how regulation can responsibly accelerate, rather than delay, technological progress. The position paper summarises how governance tools transform perceived burdens into tangible advantages: legal certainty, consumer trust, and ethical competitiveness. Ultimately, the paper reframes progress: innovation and regulation advance together. By embedding transparency, impact assessments, accountability, and AI literacy into design and deployment, the EU framework defines what responsible innovation truly means -- technological ambition disciplined by democratic values and fundamental rights.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Austria > Vienna (0.14)
- Europe > Slovakia (0.04)
- (12 more...)
- Research Report (0.64)
- Overview (0.46)
- Transportation > Air (1.00)
- Law > Statutes (1.00)
- Law Enforcement & Public Safety (1.00)
- (5 more...)
Ministers to amend data bill amid artists' concerns over AI and copyright
Artists including Paul McCartney and Tom Stoppard have thrown their weight behind a campaign against the changes in a series of high-level interventions. The government's commitments will be made in amendments to the data bill, which has become a vehicle for campaigners against the changes and is due to return to the Commons on Wednesday next week. The move has already been dismissed by critics. Ed Newton-Rex, the British composer and prominent campaigner against the government proposals, said there was a "ton of evidence" showing the mooted changes were "terrible for creators". He added: "We don't need an impact assessment to tell us this."
- Europe > United Kingdom (0.18)
- Asia > Middle East > Saudi Arabia (0.05)
- Government > Regional Government (0.75)
- Media > Music (0.51)
Assessing employment and labour issues implicated by using AI
Willems, Thijs, Hotan, Darion Jin, Tang, Jiawen Cheryl, Norhashim, Norakmal Hakim bin, Poon, King Wang, Goh, Zi An Galvyn, Vinod, Radha
This chapter critiques the dominant reductionist approach in AI and work studies, which isolates tasks and skills as replaceable components. Instead, it advocates for a systemic perspective that emphasizes the interdependence of tasks, roles, and workplace contexts. Two complementary approaches are proposed: an ethnographic, context-rich method that highlights how AI reconfigures work environments and expertise; and a relational task-based analysis that bridges micro-level work descriptions with macro-level labor trends. The authors argue that effective AI impact assessments must go beyond predicting automation rates to include ethical, well-being, and expertise-related questions. Drawing on empirical case studies, they demonstrate how AI reshapes human-technology relations, professional roles, and tacit knowledge practices. The chapter concludes by calling for a human-centric, holistic framework that guides organizational and policy decisions, balancing technological possibilities with social desirability and sustainability of work.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Asia > Singapore (0.05)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- (2 more...)
HH4AI: A Methodological Framework for AI Human Rights Impact Assessment under the EU AI Act
Ceravolo, Paolo, Damiani, Ernesto, D'Amico, Maria Elisa, Erb, Bianca de Teffe, Favaro, Simone, Fiano, Nannerel, Gambatesa, Paolo, La Porta, Simone, Maghool, Samira, Mauri, Lara, Panigada, Niccolo, Vaquer, Lorenzo Maria Ratto, Tamborini, Marta A.
This paper introduces the HH4AI Methodology, a structured approach to assessing the impact of AI systems on human rights, focusing on compliance with the EU AI Act and addressing technical, ethical, and regulatory challenges. The paper highlights AI's transformative nature, driven by autonomy, data, and goal-oriented design, and how the EU AI Act promotes transparency, accountability, and safety. A key challenge is defining and assessing "high-risk" AI systems across industries, complicated by the lack of universally accepted standards and AI's rapid evolution. To address these challenges, the paper explores the relevance of ISO/IEC and IEEE standards, focusing on risk management, data quality, bias mitigation, and governance. It proposes a Fundamental Rights Impact Assessment (FRIA) methodology, a gate-based framework designed to isolate and assess risks through phases including an AI system overview, a human rights checklist, an impact assessment, and a final output phase. A filtering mechanism tailors the assessment to the system's characteristics, targeting areas like accountability, AI literacy, data governance, and transparency. The paper illustrates the FRIA methodology through a fictional case study of an automated healthcare triage service. The structured approach enables systematic filtering, comprehensive risk assessment, and mitigation planning, effectively prioritizing critical risks and providing clear remediation strategies. This promotes better alignment with human rights principles and enhances regulatory compliance.
- North America > United States (0.46)
- Europe > Italy > Lombardy > Milan (0.05)
- South America > Brazil (0.04)
- (2 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.46)
Planners recommended against nuclear plant in 2019 citing fears for Welsh language
Planning inspectors recommended against a Hitachi-built nuclear power plant in Anglesey on the basis that it could dilute the island's Welsh language and culture, it has emerged. Hitachi scrapped plans to build a £20bn nuclear power plant at Wylfa in 2020 over cost concerns after failing to reach a funding agreement with UK ministers. Keir Starmer's government has vowed to make it easier to build major infrastructure projects by reforming the planning system and stopping campaigners from launching "excessive" legal challenges. The prime minister unveiled plans for a historic expansion in nuclear power this week, vowing to "push past nimbyism" and make sites across the country available for new power stations. Nuclear industry figures believe that the fate of Hitachi's proposed plant at Wylfa demonstrates the problems with the UK's planning system.
Developing an Ontology for AI Act Fundamental Rights Impact Assessments
Rintamaki, Tytti, Pandit, Harshvardhan J.
The recently published EU Artificial Intelligence Act (AI Act) is a landmark regulation governing the use of AI technologies. One of its novel requirements is the obligation to conduct a Fundamental Rights Impact Assessment (FRIA), where organisations in the role of deployers must assess the risks of their AI system regarding health, safety, and fundamental rights. Another novelty in the AI Act is the requirement to create a questionnaire and an automated tool to support organisations in their FRIA obligations. Such automated tools will require a machine-readable form of the information involved in the FRIA process, and additionally require machine-readable documentation to enable further compliance tools to be created. In this article, we present our novel representation of the FRIA as an ontology based on semantic web standards. Our work builds upon the existing state of the art, notably the Data Privacy Vocabulary (DPV), where similar works have been established to create tools for GDPR's Data Protection Impact Assessments (DPIA) and other obligations. Through our ontology, we enable the creation and management of FRIAs, and the use of automated tools in their various steps.
- North America > Canada (0.28)
- Europe > Switzerland (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- Europe > Germany > Bavaria > Regensburg (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
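The abstract above describes making the FRIA process machine-readable via an ontology. As a minimal sketch of what such a representation could look like, the snippet below builds RDF-style subject-predicate-object triples for one assessment and its phases. All term names under the `ex:` namespace (`FRIA`, `hasPhase`, the phase class names) are hypothetical illustrations, not actual terms from the paper's ontology or the Data Privacy Vocabulary.

```python
# Sketch of a machine-readable FRIA record as RDF-style triples.
# Every term under the ex: namespace is a hypothetical illustration,
# not an actual DPV or AI Act ontology term.

EX = "https://example.org/fria#"  # placeholder namespace

def triple(subject, predicate, obj):
    """Expand local names against the placeholder namespace."""
    return (EX + subject, EX + predicate, EX + obj)

# Ordered assessment phases (illustrative names).
PHASES = ["SystemOverview", "RightsChecklist", "ImpactAssessment", "FinalOutput"]

def build_fria_graph(assessment_id):
    """Build triples describing one FRIA and its ordered phases."""
    graph = [triple(assessment_id, "type", "FRIA")]
    for i, phase in enumerate(PHASES, start=1):
        node = f"{assessment_id}/phase{i}"
        graph.append(triple(assessment_id, "hasPhase", node))
        graph.append(triple(node, "type", phase))
    return graph

def to_ntriples(graph):
    """Serialize as N-Triples lines (all objects here happen to be IRIs)."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in graph)

if __name__ == "__main__":
    print(to_ntriples(build_fria_graph("Assessment1")))
```

A real implementation would use an RDF library and the published vocabulary terms; this sketch only shows the shape of data an automated FRIA compliance tool could consume.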
Assessing the Impact of Conspiracy Theories Using Large Language Models
Jiang, Bohan, Li, Dawei, Tan, Zhen, Zhou, Xinyi, Rao, Ashwin, Lerman, Kristina, Bernard, H. Russell, Liu, Huan
Measuring the relative impact of conspiracy theories (CTs) is important for prioritizing responses and allocating resources effectively, especially during crises. However, assessing the actual impact of CTs on the public poses unique challenges. It requires not only the collection of CT-specific knowledge but also diverse information from social, psychological, and cultural dimensions. Recent advancements in large language models (LLMs) suggest their potential utility in this context, not only due to their extensive knowledge from large training corpora but also because they can be harnessed for complex reasoning. In this work, we develop datasets of popular CTs with human-annotated impacts. Borrowing insights from human impact assessment processes, we then design tailored strategies to leverage LLMs for performing human-like CT impact assessments. Through rigorous experiments, we discover that an impact assessment mode using multi-step reasoning to analyze more CT-related evidence produces accurate results, and that most LLMs demonstrate strong bias, such as assigning higher impacts to CTs presented earlier in the prompt, while generating less accurate impact assessments for emotionally charged and verbose CTs.
- Asia > Myanmar > Tanintharyi Region > Dawei (0.05)
- North America > United States > Arizona > Maricopa County > Tempe (0.05)
- North America > United States > California (0.04)
- (8 more...)
- Social Sector (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.69)
- Media > News (0.68)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.47)
The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template
What is the context which gave rise to the obligation to carry out a Fundamental Rights Impact Assessment (FRIA) in the AI Act? How has assessment of the impact on fundamental rights been framed by the EU legislator in the AI Act? What methodological criteria should be followed in developing the FRIA? These are the three main research questions that this article aims to address, through both legal analysis of the relevant provisions of the AI Act and discussion of various possible models for assessment of the impact of AI on fundamental rights. The overall objective of this article is to fill existing gaps in the theoretical and methodological elaboration of the FRIA, as outlined in the AI Act. In order to facilitate the future work of EU and national bodies and AI operators in placing this key tool for human-centric and trustworthy AI at the heart of the EU approach to AI design and development, this article outlines the main building blocks of a model template for the FRIA. While this proposal is consistent with the rationale and scope of the AI Act, it is also applicable beyond the cases listed in Article 27 and can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
- North America > Canada (0.14)
- Europe > Italy > Piedmont > Turin Province > Turin (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (16 more...)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.93)
- (2 more...)
Towards Leveraging News Media to Support Impact Assessment of AI Technologies
Allaham, Mowafak, Kieslich, Kimon, Diakopoulos, Nicholas
Expert-driven frameworks for impact assessments (IAs) may inadvertently overlook the effects of AI technologies on the public's social behavior, policy, and the cultural and geographical contexts shaping the perception of AI and the impacts around its use. This research explores the potential of fine-tuning LLMs on negative impacts of AI reported in a diverse sample of articles from 266 news domains spanning 30 countries around the world to incorporate more diversity into IAs. Our findings highlight (1) the potential of fine-tuned open-source LLMs to support IAs of AI technologies by generating high-quality negative impacts across four qualitative dimensions: coherence, structure, relevance, and plausibility, and (2) the efficacy of a small open-source LLM (Mistral-7B) fine-tuned on impacts from news media in capturing a wider range of impact categories than GPT-4, which had gaps in coverage.
- North America > Canada (0.04)
- Oceania > Australia (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (11 more...)
- Media > News (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- (2 more...)